MoE Technology has announced the open-source release of MooER, its independently developed large audio understanding model and the first large-scale open-source speech model to be trained and run for inference on domestically produced full-featured GPUs. Training on a large-scale audio dataset took just 38 hours on the MoE KuaE intelligent computing platform. MooER performs strongly on Chinese and English speech recognition as well as Chinese-to-English speech translation, reaching a BLEU score of 25.2 on the CoVoST 2 Chinese-to-English test set, approaching industrial-grade quality. MoE Technology also plans to open-source the training code.
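
For context, BLEU scores such as the 25.2 reported above measure the n-gram overlap between a system's translation and a human reference, scaled to 0-100. The sketch below is a minimal, stdlib-only illustration of the metric's core idea (clipped n-gram precision plus a brevity penalty); it is not the official CoVoST 2 evaluation pipeline, which would typically use a standard tool such as sacrebleu.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Illustrative single-pair BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty, on a 0-100 scale."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any empty precision zeroes the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * geo_mean

# A perfect match scores 100; a partial match scores somewhere in between.
print(bleu("the cat sat on the mat", "the cat sat on the mat"))
```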